31 research outputs found

    Optimistic Variants of Single-Objective Bilevel Optimization for Evolutionary Algorithms

    Single-objective bilevel optimization is a specialized form of constrained optimization in which one of the constraints is itself an optimization problem. These problems are typically non-convex and strongly NP-hard. Recently, there has been increased interest from the evolutionary computation community in modelling bilevel problems due to their applicability to real-world decision-making problems. In this work, a partial nested evolutionary approach with a local heuristic search is proposed to solve benchmark problems, with outstanding results. The approach relies on the concept of intermarriage-crossover to search for feasible regions by exploiting information from the constraints. A new variant of the commonly used convergence approaches, i.e., optimistic and pessimistic, is also proposed, called the extreme optimistic approach. The experimental results demonstrate that the algorithm converges differently to known optimum solutions under the optimistic variants, and that the optimistic approach outperforms the pessimistic one. A comparative statistical analysis of our approach against other recently published partial-to-complete evolutionary approaches demonstrates very competitive results.
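
    The abstract does not spell out the intermarriage-crossover operator or the extreme optimistic selection rule, so the sketch below is only a generic nested evolutionary loop on an illustrative bilevel problem (not from the paper): each upper-level candidate is scored optimistically, i.e., against the best lower-level reply an inner evolutionary search can find.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative bilevel test problem (hypothetical, not from the paper):
# upper level:  min F(x, y) = (x - 1)^2 + y^2
# lower level:  y*(x) = argmin_y f(x, y) = (y - x)^2
def F(x, y):  # upper-level (leader) objective
    return (x - 1.0) ** 2 + y ** 2

def f(x, y):  # lower-level (follower) objective
    return (y - x) ** 2

def solve_lower(x, pop=20, gens=30):
    """Evolve an approximate lower-level optimum y*(x)."""
    ys = rng.uniform(-5, 5, pop)
    for _ in range(gens):
        children = ys + rng.normal(0, 0.3, pop)    # Gaussian mutation
        both = np.concatenate([ys, children])
        ys = both[np.argsort(f(x, both))][:pop]    # truncation selection
    return ys[0]

def nested_ea(pop=20, gens=40):
    xs = rng.uniform(-5, 5, pop)
    for _ in range(gens):
        children = xs + rng.normal(0, 0.3, pop)
        both = np.concatenate([xs, children])
        # Optimistic evaluation: each leader x is scored with the best
        # follower reply found by the inner search.
        scores = np.array([F(x, solve_lower(x)) for x in both])
        xs = both[np.argsort(scores)][:pop]
    x_best = xs[0]
    return x_best, solve_lower(x_best)

x, y = nested_ea()
print(f"x* ~ {x:.3f}, y* ~ {y:.3f}, F ~ {F(x, y):.4f}")
```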

    Guided Stochastic Gradient Descent Algorithm for inconsistent datasets

    The Stochastic Gradient Descent (SGD) algorithm, despite its simplicity, is considered an effective and default standard optimization algorithm for machine learning classification models such as neural networks and logistic regression. However, SGD's descent is biased by the random selection of data instances, which this paper terms data inconsistency. The proposed variation of SGD, the Guided Stochastic Gradient Descent (GSGD) algorithm, tries to overcome this inconsistency in a given dataset through greedy selection of consistent data instances for gradient descent. The empirical test results show the efficacy of the method. Moreover, GSGD has also been incorporated into and tested with other popular variations of SGD, such as Adam, Adagrad and Momentum. The guided search with GSGD achieves better convergence and classification accuracy within a limited time budget than canonical SGD and its other variations. Additionally, it maintains the same efficiency when tested on medical benchmark datasets with logistic regression for classification.
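
    As one plausible reading of the guided selection (the abstract does not define the consistency test, so the below-median-loss criterion, the toy data and all names here are assumptions), each batch could be filtered so that only its most consistent instances drive a logistic-regression update:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy logistic-regression data (synthetic, for illustration only)
X = rng.normal(size=(500, 5))
w_true = rng.normal(size=5)
y = (X @ w_true + 0.5 * rng.normal(size=500) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def guided_sgd(X, y, lr=0.1, epochs=30, batch=32, keep=0.5):
    """SGD that greedily keeps the most 'consistent' instances in each
    batch -- approximated here as the lowest-loss fraction `keep`."""
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(epochs):
        idx = rng.permutation(n)
        for start in range(0, n, batch):
            b = idx[start:start + batch]
            p = sigmoid(X[b] @ w)
            loss = -(y[b] * np.log(p + 1e-12)
                     + (1 - y[b]) * np.log(1 - p + 1e-12))
            # Guided step: update only on the consistent subset.
            k = max(1, int(keep * len(b)))
            c = b[np.argsort(loss)[:k]]
            grad = X[c].T @ (sigmoid(X[c] @ w) - y[c]) / k
            w -= lr * grad
    return w

w = guided_sgd(X, y)
acc = ((sigmoid(X @ w) > 0.5) == y).mean()
print(f"training accuracy: {acc:.3f}")
```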

    SMOTified-GAN for class imbalanced pattern classification problems

    Class imbalance in a dataset is a major problem for classifiers: it results in poor prediction, with a high true positive rate (TPR) but a low true negative rate (TNR), for a majority-positive training dataset. Generally, the pre-processing technique of oversampling the minority class(es) is used to overcome this deficiency. Our focus is on hybridizing the Generative Adversarial Network (GAN) and the Synthetic Minority Over-Sampling Technique (SMOTE) to address class imbalance problems. We propose a novel two-phase oversampling approach involving knowledge transfer that combines the strengths of SMOTE and GAN. The unrealistic or overgeneralized samples of SMOTE are transformed into a realistic distribution of data by the GAN, for cases where there is not enough minority-class data for the GAN to process effectively by itself. We name it SMOTified-GAN, as the GAN works on pre-sampled minority data produced by SMOTE rather than randomly generating the samples itself. The experimental results show that the sample quality of the minority class(es) is improved on a variety of benchmark datasets, with performance gains of up to 9% over the next best algorithm on the F1-score measure. Its time complexity is also reasonable, at around O(N²d²T) for a sequential algorithm.
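
    A minimal sketch of the two-phase idea on toy 2-D data, assuming a tiny fully connected generator/discriminator pair (the paper's architectures, datasets and hyperparameters are not given in this abstract): SMOTE first interpolates new minority samples, then the generator refines those samples, rather than random noise, until the discriminator cannot tell them from real minority data.

```python
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(2)

def smote(X_min, n_new, k=5):
    """Minimal SMOTE: interpolate between a minority sample and one of
    its k nearest minority neighbours."""
    d = ((X_min[:, None] - X_min[None]) ** 2).sum(-1)
    nn_idx = np.argsort(d, axis=1)[:, 1:k + 1]   # skip self at column 0
    i = rng.integers(0, len(X_min), n_new)
    j = nn_idx[i, rng.integers(0, k, n_new)]
    lam = rng.random((n_new, 1))
    return X_min[i] + lam * (X_min[j] - X_min[i])

# Toy minority-class data (2-D Gaussian), heavily under-sampled
X_min = rng.normal([2.0, -1.0], 0.5, size=(30, 2)).astype(np.float32)
X_smote = torch.tensor(smote(X_min, 200), dtype=torch.float32)
X_real = torch.tensor(X_min)

G = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    fake = G(X_smote)             # refine SMOTE samples, not random noise
    # Discriminator: real minority vs refined-SMOTE samples
    d_loss = (bce(D(X_real), torch.ones(len(X_real), 1))
              + bce(D(fake.detach()), torch.zeros(len(fake), 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: make refined samples look like real minority data
    g_loss = bce(D(fake), torch.ones(len(fake), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

X_oversampled = G(X_smote).detach()   # synthetic minority data for training
```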

    Classification with 2-D convolutional neural networks for breast cancer diagnosis

    Breast cancer is the most common cancer in women. Classification of cancer/non-cancer patients from clinical records requires high sensitivity and specificity for an acceptable diagnostic test. The state-of-the-art classification model, the convolutional neural network (CNN), however, cannot be used directly with tabular clinical data represented in 1-D format. CNNs are designed to work on 2-D matrices whose elements show some correlation with neighbouring elements, as in image data. Conversely, data examples represented as 1-D vectors, apart from time-series data, cannot be used with a CNN, only with other classification models such as recurrent neural networks for tabular data or random forests. We propose three novel data-wrangling preprocessing methods that transform a 1-D data vector into a 2-D graphical image with appropriate correlations among the fields, so that it can be processed by a CNN. We tested our methods on the Wisconsin Original Breast Cancer (WBC) and Wisconsin Diagnostic Breast Cancer (WDBC) datasets. To our knowledge, this is the first work to transform non-image, non-time-series tabular data into image data. The transformed data, processed with a CNN using VGGnet-16, shows competitive results on the WBC dataset and outperforms other known methods on the WDBC dataset.
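
    The three wrangling methods themselves are not described in this abstract, so the sketch below is only one hypothetical way to realize the idea: greedily order features so that highly correlated fields become neighbours, then tile the reordered vector into a small 2-D grid a CNN can consume.

```python
import numpy as np

def tabular_to_image(X, size=8):
    """One plausible 1-D -> 2-D wrangling step (not the paper's exact
    method): order features so that highly correlated fields sit next
    to each other, then tile the reordered vector into a size x size
    'image' for a CNN."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    order = [0]                                  # greedy correlation ordering
    remaining = set(range(1, X.shape[1]))
    while remaining:
        nxt = max(remaining, key=lambda j: corr[order[-1], j])
        order.append(nxt)
        remaining.remove(nxt)
    Xo = X[:, order]
    # Repeat features to fill the grid, then reshape each row to 2-D.
    reps = int(np.ceil(size * size / Xo.shape[1]))
    flat = np.tile(Xo, reps)[:, :size * size]
    return flat.reshape(len(X), size, size)

# e.g. 30 WDBC-like features -> 8x8 single-channel images
X = np.random.default_rng(3).normal(size=(100, 30))
images = tabular_to_image(X)
print(images.shape)   # (100, 8, 8)
```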

    On the relationship of degree of separability with depth of evolution in decomposition for cooperative coevolution

    Problem decomposition determines how subcomponents are created, and it plays a vital role in the performance of cooperative coevolution. Cooperative coevolution naturally appeals to fully separable problems that have low interaction amongst subcomponents; this interaction is defined by the degree of separability. Typically, in cooperative coevolution, each subcomponent is implemented as a sub-population that is evolved in a round-robin fashion for a specified depth of evolution. This paper examines the relationship between the depth of evolution and the degree of separability for different types of global optimisation problems. The results show that the depth of evolution is an important attribute that affects the performance of cooperative coevolution and can be used to ascertain the nature of the problem in terms of its degree of separability.
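
    A minimal sketch of the round-robin scheme with an explicit depth-of-evolution parameter, on a fully separable sphere function (the paper's algorithmic details and benchmarks are not given here, so the population sizes, mutation and grouping below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

def sphere(x):                     # fully separable test function
    return (x ** 2).sum()

def cooperative_coevolution(dim=12, groups=3, depth=5, cycles=40, pop=20):
    """Round-robin cooperative coevolution: each subcomponent (a slice
    of the decision vector) is evolved for `depth` generations per
    cycle, then its best individual is written back to the context."""
    split = np.array_split(np.arange(dim), groups)
    context = rng.uniform(-5, 5, dim)            # best-known full solution
    for _ in range(cycles):
        for idx in split:                        # round-robin over subpops
            subpop = rng.uniform(-5, 5, (pop, len(idx)))
            subpop[0] = context[idx]             # seed with current best
            for _ in range(depth):               # depth of evolution
                children = subpop + rng.normal(0, 0.3, subpop.shape)
                both = np.vstack([subpop, children])
                def fit(s):                      # evaluate within context
                    trial = context.copy()
                    trial[idx] = s
                    return sphere(trial)
                scores = np.apply_along_axis(fit, 1, both)
                subpop = both[np.argsort(scores)][:pop]
            context[idx] = subpop[0]             # write back best subcomponent
    return context, sphere(context)

best, val = cooperative_coevolution()
print(f"f(best) = {val:.6f}")
```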

    A guided neural network approach to predict early readmission of diabetic patients

    Diabetes is a major chronic health problem affecting millions globally. Effective diabetes management can reduce the risk of hospital readmission and the associated financial losses for both the healthcare system and insurance companies. Hospital readmission is a high-priority healthcare quality measure that reflects inadequacies in the healthcare system, increases healthcare costs and negatively influences hospitals' reputations. Predicting readmission at an early stage directs attention to patients at high risk of readmission. There have been attempts to apply machine learning predictive models such as ensemble learning with Extreme Gradient Boosting (XGBoost), Support Vector Machines (SVM) and Artificial Neural Networks (ANN) to identify whether readmission will happen within 30 days (<30 days), after 30 days (≥30 days), or never. We propose a new method, applied to the ANN, that guides its gradient descent optimizers by distinguishing consistent from inconsistent data in every batch. Our results show improvements of up to 1.5% in classification accuracy on both the 2-class and 3-class variations of the benchmark dataset when the guided optimizer, rather than the standard one, is used to train the ANN. The guided ANN also achieves better error convergence than the standard ANN.
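
    A sketch of how the guided step might look inside an ANN training loop, under the same caveats as the GSGD sketch above: the consistent-vs-inconsistent split is approximated by per-sample loss, and the data, network sizes and below-median rule are illustrative assumptions, not the paper's.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic stand-in for the readmission features and labels
# (3 classes: <30 days, >=30 days, no readmission)
X = torch.randn(1000, 20)
y = torch.randint(0, 3, (1000,))

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss(reduction="none")   # per-sample losses

for epoch in range(20):
    perm = torch.randperm(len(y))
    for start in range(0, len(y), 64):
        b = perm[start:start + 64]
        losses = loss_fn(model(X[b]), y[b])
        # Guided step: treat below-median-loss samples as 'consistent'
        # and let only those drive the weight update.
        mask = losses <= losses.median()
        opt.zero_grad()
        losses[mask].mean().backward()
        opt.step()
```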

    Guided parallelized stochastic gradient descent for delay compensation

    The stochastic gradient descent (SGD) algorithm and its variations have been used effectively to optimize neural network models. However, with the rapid growth of big data and deep learning, SGD is no longer the most suitable choice, owing to its inherently sequential optimization of the error function. This has led to the development of parallel SGD algorithms, such as asynchronous SGD (ASGD) and synchronous SGD (SSGD), to train deep neural networks. These, however, introduce high variance due to the delay in parameter (weight) updates. We address this delay in our proposed algorithm and try to minimize its impact. We employ guided SGD (gSGD), which encourages consistent examples to steer the convergence by compensating for the unpredictable deviation caused by the delay. Its convergence rate is similar to that of ASGD/SSGD; however, some additional (parallel) processing is required to compensate for the delay. The experimental results demonstrate that our proposed approach mitigates the impact of the delay on classification accuracy. The guided approach with SSGD clearly outperforms ASGD and even achieves accuracy close to that of sequential SGD on some benchmark datasets.
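
    The abstract does not describe the compensation mechanism itself, so the following toy simulation only illustrates the general idea under stated assumptions: gradients are applied several steps stale, as in ASGD/SSGD, and the guided step lets only the lowest-residual (most consistent) examples of each batch contribute, damping the variance the delay introduces.

```python
import numpy as np

rng = np.random.default_rng(5)

# Least-squares toy problem to make the delay effect visible
X = rng.normal(size=(800, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=800)

def delayed_guided_sgd(delay=4, lr=0.05, steps=400, batch=32, keep=0.5):
    """Simulated parallel SGD: each gradient is computed on weights that
    are `delay` steps stale. The guided step compensates by letting only
    the most consistent (lowest-residual) examples in the batch
    contribute to the update."""
    w = np.zeros(10)
    history = [w.copy()] * (delay + 1)     # stale parameter snapshots
    for _ in range(steps):
        stale_w = history.pop(0)           # worker read old weights
        b = rng.integers(0, len(y), batch)
        resid = X[b] @ stale_w - y[b]
        k = max(1, int(keep * batch))
        c = np.argsort(np.abs(resid))[:k]  # consistent examples only
        grad = X[b][c].T @ resid[c] / k
        w -= lr * grad                     # apply the delayed gradient now
        history.append(w.copy())
    return np.mean((X @ w - y) ** 2)

print(f"final MSE with guided delayed SGD: {delayed_guided_sgd():.4f}")
```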